Fairness and Privacy in Federated Learning and Their Implications in Healthcare
Currently, many contexts exist in which distributed learning is difficult or
otherwise constrained by security and communication limitations. One common
domain where this is a consideration is healthcare, where data is often
governed by data-use regulations such as HIPAA. At the same time, larger
sample sizes and shared models are necessary for better generalization, since
pooled data captures more variability and helps balance underrepresented
classes. Federated learning is a form of distributed learning that allows
models to be trained in a decentralized manner. This, in turn, addresses data
security, privacy, and vulnerability considerations, as the data itself is
never shared across the nodes of a learning network. Federated learning faces
three main challenges: node data that is not independent and identically
distributed (iid), high communication overhead between clients, and the
heterogeneity of clients within a network with respect to dataset bias and
size. As the field has grown, the
notion of fairness in federated learning has also been introduced through novel
implementations. Fairness approaches differ from the standard form of federated
learning and also have distinct challenges and considerations for the
healthcare domain. This paper endeavors to outline the typical lifecycle of
fair federated learning in research as well as provide an updated taxonomy to
account for the current state of fairness in implementations. Lastly, this
paper provides added insight into the implications and challenges of
implementing and supporting fairness in federated learning in the healthcare
domain.
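The decentralized training loop described in the abstract can be illustrated with a minimal Federated Averaging (FedAvg) sketch, the canonical aggregation scheme for federated learning: each client trains on its own private data, and only the resulting model weights, averaged in proportion to client dataset size, are exchanged. All function names and the toy one-parameter regression model below are illustrative assumptions, not taken from the paper.

```python
def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a 1-D least-squares model (y = w*x),
    run locally on a client's private (x, y) pairs."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def fedavg(global_w, client_datasets, rounds=50):
    """Each round, every client trains on its own data; only the resulting
    weights (never the raw data) are returned to the server, which averages
    them weighted by client dataset size."""
    for _ in range(rounds):
        local_ws = [local_update(global_w, d) for d in client_datasets]
        total = sum(len(d) for d in client_datasets)
        global_w = sum(w * len(d)
                       for w, d in zip(local_ws, client_datasets)) / total
    return global_w

# Two clients whose private data follow y = 3x; the server never sees (x, y).
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
model = fedavg(0.0, clients)
print(round(model, 2))  # converges toward the true slope, 3.0
```

Note that the weighted average itself is a source of the fairness concerns the paper discusses: clients with larger datasets dominate the aggregated model, which is one way dataset-size heterogeneity biases the global model against underrepresented nodes.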